
    Multi-Index Monte Carlo: When Sparsity Meets Sampling

    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, MIMC uses high-order mixed differences instead of the first-order differences used in MLMC to reduce the variance of the hierarchical differences dramatically. This in turn yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis and which enlarge the domain of problem parameters for which we achieve the optimal convergence, $\mathcal{O}(\text{TOL}^{-2})$. Moreover, in MIMC, the rate of increase of required memory with respect to $\text{TOL}$ is independent of the number of directions up to a logarithmic term, which allows far more accurate solutions to be computed in higher dimensions than is possible with MLMC. We motivate the setting of MIMC by first focusing on a simple full tensor index set. We then propose a systematic construction of optimal sets of indices for MIMC based on properly defined profits that in turn depend on the average cost per sample and the corresponding weak error and variance. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of total degree (TD) type. In some cases, using optimal index sets, MIMC achieves a better rate of computational complexity than the corresponding rate when using full tensor index sets.
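
    To make the mixed-difference construction concrete, here is a minimal Python sketch on a toy problem. The quadratic model, its factorized bias, and the per-index sample allocation are illustrative assumptions, not the paper's setting; only the tensorized (mixed) difference operator and the total-degree index set follow the abstract above.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def approx_sample(alpha, z):
    # Hypothetical level-alpha approximation of f(z) = z**2 whose bias
    # factorizes across directions, so mixed differences decay like
    # 4**(-sum(alpha)) -- the regime in which MIMC shines.
    bias = np.prod([1.0 - 4.0 ** (-(a + 1)) for a in alpha])
    return bias * z**2

def mixed_difference(alpha, z):
    # Mixed difference: sum over all subsets of directions of
    # (-1)^{|subset|} * P_{alpha - e_subset}, evaluated on the SAME random
    # input z so that the terms are strongly correlated.
    total = np.zeros_like(z)
    for signs in itertools.product((0, 1), repeat=len(alpha)):
        shifted = tuple(a - s for a, s in zip(alpha, signs))
        if min(shifted) >= 0:  # P at a negative index is defined as 0
            total += (-1) ** sum(signs) * approx_sample(shifted, z)
    return total

def mimc_estimate(d=2, L=4, M0=100_000):
    # Total-degree (TD) index set {alpha : sum(alpha) <= L}, with a crude
    # geometric decay of the number of samples per multi-index.
    estimate = 0.0
    for alpha in itertools.product(range(L + 1), repeat=d):
        if sum(alpha) <= L:
            M = max(M0 // 4 ** sum(alpha), 50)
            estimate += mixed_difference(alpha, rng.standard_normal(M)).mean()
    return estimate

print(mimc_estimate())  # converges to E[Z^2] = 1 as L grows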

    Nested Multilevel Monte Carlo with Biased and Antithetic Sampling

    We consider the problem of estimating a nested structure of two expectations taking the form $U_0 = E[\max\{U_1(Y), \pi(Y)\}]$, where $U_1(Y) = E[X \mid Y]$. Terms of this form arise in financial risk estimation and option pricing. When $U_1(Y)$ requires approximation, but exact samples of $X$ and $Y$ are available, an antithetic multilevel Monte Carlo (MLMC) approach has been well studied in the literature. Under general conditions, the antithetic MLMC estimator obtains a root mean squared error $\varepsilon$ with order $\varepsilon^{-2}$ cost. If, additionally, $X$ and $Y$ require approximate sampling, careful balancing of the various aspects of approximation is required to avoid a significant computational burden. Under strong convergence criteria on approximations to $X$ and $Y$, randomised multilevel Monte Carlo techniques can be used to construct unbiased Monte Carlo estimates of $U_1$, which can be paired with an antithetic MLMC estimate of $U_0$ to recover order $\varepsilon^{-2}$ computational cost. In this work, we instead consider biased multilevel approximations of $U_1(Y)$, which require less strict assumptions on the approximate samples of $X$. Extensions of the method consider approximate and antithetic sampling of $Y$. Analysis shows the resulting estimator has order $\varepsilon^{-2}$ asymptotic cost under the conditions required by randomised MLMC and order $\varepsilon^{-2}|\log\varepsilon|^3$ cost under more general assumptions.
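
    A minimal sketch of the antithetic coupling described above, on a hypothetical Gaussian toy model (the conditional distribution, the payoff $\pi \equiv 0$, and the geometric sample-size schedule are assumptions for illustration): the level-$l$ fine estimator averages $2N_l$ inner samples, and the antithetic coarse term averages the two $N_l$-sample halves.

```python
import numpy as np

rng = np.random.default_rng(1)

def antithetic_level(l, M, N0=2):
    # One antithetic MLMC term for U0 = E[max{E[X|Y], 0}] under the toy model
    # Y ~ N(0,1), X|Y ~ N(Y,1), pi(y) = 0: the fine estimator averages
    # 2*N0*2^l inner samples; the coarse term averages the two halves, each
    # of which is distributed like the level-(l-1) fine estimator.
    N = N0 * 2**l
    y = rng.standard_normal(M)
    x = y[:, None] + rng.standard_normal((M, 2 * N))
    fine = np.maximum(x.mean(axis=1), 0.0)
    if l == 0:
        return fine.mean()
    coarse_a = np.maximum(x[:, :N].mean(axis=1), 0.0)
    coarse_b = np.maximum(x[:, N:].mean(axis=1), 0.0)
    return (fine - 0.5 * (coarse_a + coarse_b)).mean()

def mlmc_estimate(L=6, M0=200_000):
    # Crude geometric decay of the outer sample sizes across levels; a real
    # implementation would set M_l from estimated variances and costs.
    return sum(antithetic_level(l, max(M0 // 4**l, 100)) for l in range(L + 1))

# Exact value for the toy model: E[max(Y, 0)] = 1/sqrt(2*pi).
print(mlmc_estimate(), 1.0 / np.sqrt(2.0 * np.pi))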

    Multi-index Stochastic Collocation convergence rates for random PDEs with parametric regularity

    We analyze the recent Multi-index Stochastic Collocation (MISC) method for computing statistics of the solution of a partial differential equation (PDE) with random data, where the random coefficient is parametrized by means of a countable sequence of terms in a suitable expansion. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data, and, naturally, the error analysis uses the joint regularity of the solution with respect to both the variables in the physical domain and the parametric variables. In MISC, the number of problem solutions performed at each discretization level is not determined by balancing the spatial and stochastic components of the error, but rather by suitably extending the knapsack-problem approach employed in the construction of the quasi-optimal sparse-grid and Multi-index Monte Carlo methods. We use a greedy optimization procedure to select the most effective mixed differences to include in the MISC estimator. We apply our theoretical estimates to a linear elliptic PDE in which the log-diffusion coefficient is modeled as a random field, with a covariance similar to a Matérn model, whose realizations have spatial regularity determined by a scalar parameter. We conduct a complexity analysis based on a summability argument, showing algebraic rates of convergence with respect to the overall computational work. The rate of convergence depends on the smoothness parameter, the physical dimensionality and the efficiency of the linear solver. Numerical experiments show the effectiveness of MISC in this infinite-dimensional setting compared with the Multi-index Monte Carlo method and compare the convergence rate against the rates predicted in our theoretical analysis.
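
    The combination-technique structure of MISC can be illustrated in a few lines. In this sketch the "PDE solve" is replaced by a hypothetical scalar quantity of interest with an assumed algebraic spatial error, and the stochastic direction is a one-dimensional Gauss-Hermite rule; the mixed differences and the total-degree index set follow the abstract, while the paper's greedy, profit-based selection is not reproduced.

```python
import itertools

import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def Q_spatial(i, y):
    # Hypothetical "spatial level i" approximation of a QoI Q(y) = exp(y),
    # with an assumed algebraic error 4^(-(i+1)) standing in for a FEM solve.
    return np.exp(y) * (1.0 - 4.0 ** (-(i + 1)))

def quadrature(j, f):
    # Gauss-Hermite rule with j+1 nodes for E[f(Y)], Y ~ N(0,1); hermegauss
    # weights sum to sqrt(2*pi), hence the normalization.
    x, w = hermegauss(j + 1)
    return np.dot(w, f(x)) / np.sqrt(2.0 * np.pi)

def misc_estimate(L=5):
    # Combination-technique sum of mixed (spatial x quadrature) differences
    # over the total-degree set {(i, j) : i + j <= L}.
    def Q(i, j):
        if min(i, j) < 0:                      # convention: Q at index -1 is 0
            return 0.0
        return quadrature(j, lambda y: Q_spatial(i, y))
    return sum(Q(i, j) - Q(i - 1, j) - Q(i, j - 1) + Q(i - 1, j - 1)
               for i, j in itertools.product(range(L + 1), repeat=2)
               if i + j <= L)

print(misc_estimate(), np.exp(0.5))   # exact value: E[exp(Y)] = e^(1/2)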

    Sub-sampling and other considerations for efficient risk estimation in large portfolios

    Computing risk measures of a financial portfolio comprising thousands of options is a challenging problem because (a) it involves a nested expectation requiring multiple evaluations of the loss of the financial portfolio for different risk scenarios and (b) evaluating the loss of the portfolio is expensive and the cost increases with its size. In this work, we apply Multilevel Monte Carlo (MLMC) with adaptive inner sampling to this problem and discuss several practical considerations. In particular, we discuss a sub-sampling strategy that results in a method whose computational complexity does not increase with the size of the portfolio. We also discuss several control variates that significantly improve the efficiency of MLMC in our setting.
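
    The sub-sampling idea can be sketched in isolation: replace the full portfolio loss by an unbiased estimate built from a random subset of the positions, so the cost per inner evaluation is fixed regardless of the portfolio size. The payoff, strike distribution, and subset size below are illustrative assumptions; in the paper this estimator is embedded inside MLMC with adaptive inner sampling.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical portfolio of K put options; in practice each loss evaluation
# would be an expensive pricing routine rather than a closed-form payoff.
K = 10_000
strikes = rng.uniform(0.5, 1.5, K)

def option_loss(y, idx):
    # Loss of options `idx` under risk scenario y (toy payoff, an assumption).
    return np.maximum(strikes[idx] - np.exp(y), 0.0)

def full_loss(y):
    # O(K) cost: touches every position in the portfolio.
    return option_loss(y, np.arange(K)).sum()

def subsampled_loss(y, m=64):
    # Unbiased estimate of the total loss from m << K uniformly sampled
    # positions: E[K * mean] = sum of all option losses, at O(m) cost.
    idx = rng.integers(0, K, m)
    return K * option_loss(y, idx).mean()

y = 0.1
print(full_loss(y), np.mean([subsampled_loss(y) for _ in range(1000)]))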

    Multilevel Path Branching for Digital Options

    We propose a new Monte Carlo-based estimator for digital options with assets modelled by a stochastic differential equation (SDE). The new estimator is based on repeated path splitting and relies on the correlation of approximate paths of the underlying SDE that share parts of a Brownian path. Combining this new estimator with Multilevel Monte Carlo (MLMC) leads to an estimator with a complexity that is similar to that of an MLMC estimator applied to options with Lipschitz payoffs.
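
    A minimal, single-split sketch of the construction (the paper uses repeated, recursive splitting combined with MLMC; the GBM model, Euler scheme, and branch count here are assumptions for illustration): children paths share the first half of the Brownian path, and averaging the digital payoff over the branches smooths the discontinuity.

```python
from math import erf, sqrt

import numpy as np

rng = np.random.default_rng(3)

def digital_branching(M=20_000, n=64, branches=8,
                      s0=1.0, K=1.0, r=0.05, sigma=0.2, T=1.0):
    # Euler-Maruyama paths for GBM; every path is split once at T/2 into
    # `branches` children that share the first half of the Brownian path.
    dt = T / n
    half = n // 2
    s = np.full(M, s0)
    for _ in range(half):                      # shared first half-path
        s = s * (1 + r * dt + sigma * sqrt(dt) * rng.standard_normal(M))
    payoff = np.zeros(M)
    for _ in range(branches):                  # independent continuations
        sb = s.copy()
        for _ in range(n - half):
            sb = sb * (1 + r * dt + sigma * sqrt(dt) * rng.standard_normal(M))
        payoff += (sb > K)
    # Averaging the indicator over branches smooths the digital payoff.
    return (payoff / branches).mean()

d2 = (0.05 - 0.5 * 0.2**2) / 0.2               # log(s0/K) = 0, T = 1
print(digital_branching(), 0.5 * (1 + erf(d2 / sqrt(2))))  # vs. P(S_T > K)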

    Multilevel nested simulation for efficient risk estimation

    We investigate the problem of computing a nested expectation of the form $\mathbb{P}[\mathbb{E}[X \mid Y] \geq 0] = \mathbb{E}[H(\mathbb{E}[X \mid Y])]$, where $H$ is the Heaviside function. This nested expectation appears, for example, when estimating the probability of a large loss from a financial portfolio. We present a method that combines the idea of using Multilevel Monte Carlo (MLMC) for nested expectations with the idea of adaptively selecting the number of samples in the approximation of the inner expectation, as proposed by [M. Broadie, Y. Du, and C. C. Moallemi, Manag. Sci., 57 (2011), pp. 1172--1194]. We propose and analyze an algorithm that adaptively selects the number of inner samples on each MLMC level and prove that the resulting MLMC method with adaptive sampling has $\mathcal{O}(\varepsilon^{-2}|\log\varepsilon|^2)$ complexity to achieve a root mean-squared error $\varepsilon$. The theoretical analysis is verified by numerical experiments on a simple model problem. We also present a stochastic root-finding algorithm that, combined with our adaptive methods, can be used to compute other risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR), the latter being achieved with $\mathcal{O}(\varepsilon^{-2})$ complexity.
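
    The adaptive inner-sampling idea, on a hypothetical Gaussian model, in a few lines: refine the inner estimate of $\mathbb{E}[X \mid Y]$ only while it sits within roughly one standard error of the Heaviside kink at zero, so outer samples far from the discontinuity stay cheap. The model, initial batch size, and doubling cap are assumptions; the paper's method additionally couples this with MLMC levels.

```python
import numpy as np

rng = np.random.default_rng(4)

def inner_mean(y, n):
    # Hypothetical model: X | Y=y ~ N(y, 1), so E[X|Y] = Y and the exact
    # answer is P(Y >= 0) = 1/2 for Y ~ N(0,1).
    return y + rng.standard_normal(n).mean()

def adaptive_heaviside(y, n0=16, n_max=4096):
    # Double the inner sample size only while the inner estimate is within
    # about one standard error (std of X|Y is 1) of the kink at zero.
    n = n0
    est = inner_mean(y, n)
    while abs(est) < 1.0 / np.sqrt(n) and n < n_max:
        n *= 2
        est = inner_mean(y, n)  # fresh batch; a refined code would reuse samples
    return float(est >= 0.0)

M = 20_000
y = rng.standard_normal(M)
print(np.mean([adaptive_heaviside(yi) for yi in y]))   # should be near 0.5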

    Optimization of mesh hierarchies in Multilevel Monte Carlo samplers

    We perform a general optimization of the parameters in the Multilevel Monte Carlo (MLMC) discretization hierarchy based on uniform discretization methods with general approximation orders and computational costs. We optimize hierarchies with geometric and non-geometric sequences of mesh sizes and show that geometric hierarchies, when optimized, are nearly optimal and have the same asymptotic computational complexity as non-geometric optimal hierarchies. We discuss how enforcing constraints on the parameters of MLMC hierarchies affects their optimality; these constraints include upper and lower bounds on the mesh size and the requirement that the number of samples and the number of discretization elements be integers. We also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. To provide numerical grounds for our theoretical results, we apply these optimized hierarchies together with the Continuation MLMC Algorithm. The first example considers a three-dimensional elliptic partial differential equation with random inputs. Its space discretization is based on continuous piecewise trilinear finite elements, and the corresponding linear system is solved by either a direct or an iterative solver. The second example considers a one-dimensional Itô stochastic differential equation discretized by a Milstein scheme.
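
    For a geometric hierarchy under the usual parametric models, the optimized sample allocation and tolerance splitting take a closed form. The sketch below implements that classical allocation (with a fixed splitting parameter $\theta$ and integer-rounded sample sizes); the rate constants are placeholder assumptions, and the paper's full optimization over non-geometric hierarchies and constrained parameters is not reproduced.

```python
import numpy as np

def optimal_hierarchy(eps, alpha=2.0, beta=2.0, gamma=3.0,
                      B0=1.0, V0=1.0, C0=1.0, theta=0.5):
    # Parametric models (placeholder assumptions): bias_l = B0*2^(-alpha*l),
    # V_l = V0*2^(-beta*l), C_l = C0*2^(gamma*l). The splitting parameter
    # theta assigns theta*eps^2 to the variance and (1-theta)*eps^2 to the
    # squared bias.
    L = int(np.ceil(np.log2(B0 / (np.sqrt(1.0 - theta) * eps)) / alpha))
    l = np.arange(L + 1)
    V = V0 * 2.0 ** (-beta * l)
    C = C0 * 2.0 ** (gamma * l)
    # Lagrange-multiplier solution of: minimize sum(M*C) subject to
    # sum(V/M) <= theta*eps^2, rounded up so sample counts are integers.
    M = np.ceil(np.sqrt(V / C) * np.sum(np.sqrt(V * C)) / (theta * eps**2))
    return M.astype(int), float(M @ C)

for eps in (1e-1, 1e-2, 1e-3):
    M, work = optimal_hierarchy(eps)
    print(f"eps={eps:g}  L={len(M) - 1}  M={M}  work={work:.3g}")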

    A Continuation Multilevel Monte Carlo algorithm

    We propose a novel Continuation Multilevel Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding weak and strong errors. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only a few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and thereby justify our error estimate, which allows prescribing both the required accuracy and the confidence in the final result. Numerical results substantiate the above and illustrate the corresponding computational savings in examples described in terms of differential equations either driven by random measures or with random coefficients.
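
    A compressed sketch of the continuation loop: solve for a decreasing sequence of tolerances, re-estimating per-level variances from warm-up samples before each pass. The toy level sampler, cost model, and fixed splitting parameter are assumptions, and plain empirical variances stand in for the paper's Bayesian calibration of the weak and strong error parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

def correction(l, M):
    # Toy level-l MLMC correction samples: mean 2^(-2l), std 2^(-l)
    # (assumed weak/strong rates standing in for a discretized SDE/PDE).
    return 2.0 ** (-2 * l) + 2.0 ** (-l) * rng.standard_normal(M)

def cmlmc(eps_target, eps0=0.5, reduce=2.0, theta=0.5, alpha=2.0, M_warm=50):
    eps = eps0
    while True:
        # Depth from the weak-error model 2^(-alpha*L) <= sqrt(1-theta)*eps.
        L = int(np.ceil(np.log2(1.0 / (np.sqrt(1.0 - theta) * eps)) / alpha))
        # Warm-up samples give empirical per-level variances (the paper
        # calibrates these with Bayesian estimation instead).
        V = np.array([correction(l, M_warm).var() for l in range(L + 1)])
        C = 2.0 ** np.arange(L + 1)           # assumed cost-per-sample model
        M = np.ceil(np.sqrt(V / C) * np.sum(np.sqrt(V * C))
                    / (theta * eps**2)).astype(int)
        estimate = sum(correction(l, max(m, 2)).mean()
                       for l, m in enumerate(M))
        if eps <= eps_target:
            return estimate
        eps /= reduce                          # continue with tighter tolerance

print(cmlmc(1e-2))   # telescoping limit: sum_l 4^(-l) = 4/3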